Miscellaneous

Over the years, many features have been added to the different versions of UNIX to improve Oracle performance as a result of performance tuning and testing. Many of these features have come out of benchmarking efforts. Benchmarking is probably the best way to accurately measure the performance of a system and any gains that occur when new features are added. Although it is unusual for an RDBMS vendor to put a feature into its product solely to improve performance on a single benchmark, it has happened. However, since the TPC added a clause to each benchmark specification prohibiting “benchmark-special” features, this practice has stopped.

The best way to judge the effect of a new feature is in a controlled environment. By using a benchmark or a custom test you have developed, you can measure performance gains quantifiably. Features are added not only to Oracle but to the OS as well. The following sections list some of these features.

Post-Wait Semaphore

The post-wait semaphore is designed to allow Oracle to have more control over the scheduling of processes (refer to Chapter 11, “Tuning the Server Operating System”). This new type of semaphore allows Oracle to signal sleeping processes and start them working again, significantly reducing idle CPU cycles. The post-wait semaphore is available in most varieties of the UNIX operating system except for Solaris. In Solaris, a different scheduling algorithm and high-speed semaphores have made the post-wait semaphore unnecessary.

The post-wait semaphore was developed by Oracle and several UNIX operating system vendors. Post-wait consists of a device driver and a post-wait device. Access to the post-wait device is through an ioctl call into that driver. When the post-wait driver is enabled, both within the OS and within Oracle, the use of traditional semaphores and sleep or alarm calls is eliminated.
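
To make that access pattern concrete, the following C fragment sketches how a process might open the post-wait device and issue an ioctl call into the driver. It is purely illustrative: the request codes PW_WAIT and PW_POST and their semantics are hypothetical placeholders, because the real driver interface is defined by each operating system vendor and is called internally by Oracle rather than by application code.

/* Illustrative sketch only -- PW_WAIT and PW_POST are hypothetical
 * request codes; the actual post-wait driver interface is vendor-specific
 * and is used internally by Oracle. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

#define PW_WAIT 1     /* hypothetical: sleep until another process posts us */
#define PW_POST 2     /* hypothetical: wake a waiting process */

int main(void)
{
    int fd = open("/dev/pw", O_RDWR);     /* the post-wait device */
    if (fd < 0) {
        perror("open /dev/pw");
        return 1;
    }

    /* A waiting process sleeps inside the driver until it is posted;
     * no semaphore operations or sleep()/alarm() calls are needed. */
    if (ioctl(fd, PW_WAIT, 0) < 0)
        perror("ioctl PW_WAIT");

    close(fd);
    return 0;
}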

To enable post-wait within Oracle, set the initialization parameter USE_POST_WAIT_DRIVER to TRUE; set the initialization parameter POST_WAIT_DEVICE to the name of the post-wait device. The default initialization file provided with Oracle should have the default post-wait device name (typically /dev/pw). See your installation and tuning guide for details.
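
As a sketch, the corresponding lines in the Oracle initialization file look something like the following; /dev/pw is only the typical default named above, so verify the device name for your port in the installation and tuning guide.

# init.ora fragment -- enable the post-wait driver on UNIX ports that support it
USE_POST_WAIT_DRIVER = TRUE
POST_WAIT_DEVICE     = /dev/pw        # typical default; verify for your platform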

Post-wait should offer immediate and fairly significant performance improvements. Using the post-wait semaphore carries no risk of performance degradation. I recommend that you always use post-wait when it is available.

Intimate Shared Memory

Intimate Shared Memory (ISM) is available on the Solaris operating system. ISM allows for sharing of page tables for shared memory. This feature is also used as the vehicle to enable the 4M pages described earlier in this chapter. ISM can also lock pages into memory, reducing the overhead for doing I/O. Oracle enables this feature when the initialization parameter USE_ISM is set to TRUE.
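
In the initialization file this is a single line; the fragment below assumes a Solaris port that supports ISM.

# init.ora fragment -- enable Intimate Shared Memory (Solaris)
USE_ISM = TRUE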

When ISM is available, you should turn it on to enable the 4M pages and reduce overhead. Locking the shared memory region into physical memory and making contiguous 4M pages available reduces overhead and enhances performance.

Scheduler

Several scheduling options can enhance Oracle’s performance on the UNIX operating system (refer to Chapter 11). These options include the disabling of preemptive scheduling, load balancing, and cache affinity.

By disabling preemptive scheduling, you can take advantage of the fact that typical Oracle processes are of short duration. When you preempt short-running processes, you waste CPU cycles performing the task switch when, in fact, the running process most likely would have relinquished the CPU in a short time anyway. With preemptive scheduling disabled, processes run to completion. In this case, completion usually means that the process performs an I/O operation, relinquishes the CPU, and goes to sleep waiting for the I/O to complete.

Preemptive scheduling is disabled in SCO UNIX by setting both the preemptive and load_balance variables to zero in the file /etc/conf/pack.d/crllry/space.c. These two parameters are similar: when preemptive = 1, a higher-priority process can preempt a lower-priority process running on the same CPU; when load_balance = 1, a process on one CPU can preempt a process on another CPU. With both variables set to zero, processes run to completion.
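
As a sketch, the two assignments in /etc/conf/pack.d/crllry/space.c would end up looking like the lines below. Treat the declarations as illustrative -- the actual contents of space.c vary by release -- and note that changing the file typically requires relinking the kernel and rebooting before the new values take effect.

/* /etc/conf/pack.d/crllry/space.c (illustrative fragment) */
int preemptive   = 0;    /* 0 = do not preempt a running process on the same CPU */
int load_balance = 0;    /* 0 = do not preempt a process running on another CPU */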


CAUTION:  When you disable preemptive scheduling, there is a danger that a runaway process may run until its maximum time slice has been used; by default, this is 1 second. You can limit this exposure by setting the UNIX parameter MAXSLICE to 3 or 4. MAXSLICE represents the maximum number of ticks a process can run before it must relinquish the CPU, and a tick in UNIX is typically 10 milliseconds, so a MAXSLICE of 4 allows a runaway process at most about 40 milliseconds. If you have very fast processors, you may be able to set this parameter to 1 or 2. Even when MAXSLICE is set to 1 or 2, it is unlikely that a well-behaved process will hit that limit.
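
The exact mechanism for changing MAXSLICE varies by release; the command below is only a sketch that assumes the System V idtune utility used for kernel tunables on SCO-style systems, followed by the usual kernel relink and reboot. Check your operating system documentation for the procedure that applies to you.

# illustrative only -- set the MAXSLICE kernel tunable to 4 ticks (about 40 ms)
/etc/conf/bin/idtune MAXSLICE 4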

By disabling cache affinity, you may be able to reduce some unnecessary overhead. Cache affinity attempts to run a process on the last processor it ran on. The theory behind this feature is that if any of the process's data is still in that CPU's cache from the last time the process ran, the process can take advantage of it. Whether running Oracle this way helps depends on your workload.

If you have a heavily loaded machine with a large number of users (such as in a large OLTP system), it is extremely unlikely that there will be any data left in the CPU cache when it is time for your process to run again. In effect, you have just wasted many CPU cycles in the OS scheduler trying to relocate a process for no good reason.

On the other hand, if you are running a large application with a small number of processes (such as a DSS system), you may benefit from cache affinity. If you have a fair number of processes all running the same shared code, you will also see a benefit from cache affinity.

By carefully analyzing your workload, you should come to a logical conclusion about whether cache affinity will be a benefit for you. If you are not sure, it is probably best to leave the system in its default configuration.

UNIX Summary

Over the years, many features have been added to the various implementations of UNIX to improve the performance of Oracle. Part of the reason that these improvements have occurred is that many hardware vendors have a proprietary UNIX implementation and these vendors have worked closely with Oracle to improve RDBMS performance. Because UNIX has been around and stable for many years, there has been ample opportunity to test and analyze system performance.

As you have noticed, there is much more tuning and configuration involved when using the UNIX operating system than there is when you use NetWare or Windows NT. When choosing an OS, you must weigh the importance of configurability and control versus ease of use.

Recently, SCO acquired the UnixWare operating system from Novell and formed a development alliance with Hewlett-Packard. As a result, a new operating system will evolve over the next few years that has some of the characteristics of both SCO UNIX and UnixWare. All the tuning concepts discussed here will still apply; however, the specific parameters may differ slightly.

